Shen Li

I am a Ph.D. candidate at MIT CSAIL, where I am advised by Prof. Julie Shah. I hold an M.S. in Robotics from Carnegie Mellon University, where I was co-advised by Prof. Siddhartha Srinivasa and Prof. Stephanie Rosenthal, and dual B.S. degrees in Computer Science and Psychology from the Pennsylvania State University.

I am on the 2024-2025 Academic Job Market!
Here is my research statement.

shenli@mit.edu  |  CV  |  Google Scholar  |  Twitter  |  LinkedIn  |  GitHub

Research Overview

I aim to enable robots to continuously adapt their assistance to users' evolving needs. My research focuses on personalizing robot assistance through human feedback: when humans interact with robots, their behaviors unintentionally reveal preferences, much as nonverbal cues reveal intentions. By interpreting these subtle cues, known as implicit human feedback, robots can offer more personalized assistance.

Drawing from psychology, I develop algorithms that help robots interpret implicit human feedback, understand cognitive and behavioral patterns, identify user preferences and intentions, and adapt robot assistance. My work has been demonstrated in real-world applications, such as robot-assisted dressing and collaborative industrial assembly. I also led the deployment of three research projects as permanent exhibits at the MIT Museum.

My research spans several key areas, outlined in the numbered sections below.

Below, * denotes equal contribution or alphabetical ordering.

1. Revealing User Cognitive Processes for Efficient Preference Learning
Enhancing Preference-based Linear Bandits via Human Response Time
  • Shen Li*, Yuyang Zhang*, Zhaolin Ren, Claire Liang, Na Li, Julie A. Shah
  • Advances in Neural Information Processing Systems (NeurIPS) (2024)
    (Oral presentation)
  • PDF
  • The key contribution is the first preference-based bandit algorithm to incorporate human response times as implicit feedback. By interpreting response times with the EZ-diffusion model from psychology (see the sketch below), this work demonstrates, both theoretically and empirically, that response times reveal preference strength and significantly improve learning efficiency. This research lays the foundation for future advances in robot personalization, recommender systems, and the fine-tuning of large language models (LLMs).
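For intuition, here is a minimal sketch of the closed-form EZ-diffusion moment equations (Wagenmakers et al., 2007), which map choice accuracy and response-time statistics to a drift rate, the quantity that reflects preference strength. This illustrates only the underlying psychology model, not the paper's bandit algorithm, and all numbers are hypothetical.

```python
import math

# EZ-diffusion (Wagenmakers et al., 2007): closed-form estimates of
# drift-diffusion parameters from three summary statistics of one condition.
#   pc:  proportion of choices for the stronger option
#   vrt: variance of response times (s^2)
#   mrt: mean response time (s)
#   s:   conventional scaling constant
def ez_diffusion(pc, vrt, mrt, s=0.1):
    assert 0.0 < pc < 1.0 and pc != 0.5, "edge cases need a standard correction"
    s2 = s * s
    logit = math.log(pc / (1.0 - pc))
    x = logit * (logit * pc**2 - logit * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25  # drift rate
    a = s2 * logit / v                              # boundary separation
    y = -v * a / s2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))
    return v, a, mrt - mdt  # (drift, boundary, non-decision time)

# Faster, more consistent choices yield a larger drift rate,
# i.e., evidence of a stronger preference.
print(ez_diffusion(pc=0.8, vrt=0.11, mrt=0.72))
```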
2. Anticipating User Behavior for Safe Assistance
Set-based State Estimation with Probabilistic Consistency Guarantee under Epistemic Uncertainty
  • Shen Li*, Theodoros Stouraitis*, Michael Gienger, Sethu Vijayakumar, Julie A. Shah
  • IEEE Robotics and Automation Letters (RA-L) (2022) (Impact factor: 5.2)
  • PDF | Video | MIT News | MIT Instagram | MIT Technology Review Brazil
  • The key contribution is the first set-based state estimator that guarantees probabilistic consistency under nonlinear dynamics and observation models learned from offline datasets. We show, both theoretically and empirically, that accounting for uncertainty from learning errors improves estimation performance. The approach has been applied to interpret human sensor signals as implicit feedback and estimate latent human states in a robot-assisted dressing task.
Provably Safe and Efficient Motion Planning with Uncertain Human Dynamics
Safe and Efficient High Dimensional Motion Planning in Space-Time with Time Parameterized Prediction
  • Shen Li, Julie A. Shah
  • IEEE International Conference on Robotics and Automation (ICRA) (2019)
    (Acceptance rate: 45%)
  • PDF | Poster
  • This paper introduces a motion planner that builds a roadmap to efficiently approximate the high-dimensional configuration-time space, enabling the robot to plan quickly around dynamic obstacles (a toy space-time roadmap is sketched below). Empirical results show that the method consistently produces collision-free, efficient trajectories with significantly reduced planning times.
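To make the idea concrete, here is a minimal sketch of roadmap planning in configuration-time space for a 2D point robot and one moving circular obstacle with a known time-parameterized prediction. The workspace, obstacle model, and parameters are all illustrative; the paper targets high-dimensional robot configurations and richer predictions.

```python
import heapq
import math
import random

def obstacle_center(t):
    # Predicted obstacle position at time t, assumed available to the planner.
    return (1.0 + 0.4 * t, 2.5)

def in_collision(p, t, radius=0.7):
    cx, cy = obstacle_center(t)
    return math.hypot(p[0] - cx, p[1] - cy) < radius

def edge_free(p, tp, q, tq, steps=10):
    # Check the straight configuration-time segment at interpolated instants.
    return all(
        not in_collision(((1 - s) * p[0] + s * q[0],
                          (1 - s) * p[1] + s * q[1]),
                         (1 - s) * tp + s * tq)
        for s in (i / steps for i in range(steps + 1)))

def plan(start, goal, v_max=1.5, t_max=8.0, n_samples=400, seed=1):
    rng = random.Random(seed)
    # Roadmap nodes live in configuration-time space: ((x, y), t).
    nodes = [(start, 0.0)]
    nodes += [((rng.uniform(0.0, 5.0), rng.uniform(0.0, 5.0)),
               rng.uniform(0.0, t_max)) for _ in range(n_samples)]
    # Dijkstra on arrival time over forward-in-time, speed-feasible,
    # collision-free edges.
    best, parent, frontier = {0: 0.0}, {}, [(0.0, 0)]
    while frontier:
        t_arr, i = heapq.heappop(frontier)
        if t_arr > best.get(i, math.inf):
            continue
        p, tp = nodes[i]
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < 0.3:
            path, k = [], i
            while k in parent:
                path.append(nodes[k])
                k = parent[k]
            return [nodes[0]] + path[::-1]
        for j, (q, tq) in enumerate(nodes):
            if tq <= tp:
                continue  # edges must advance in time
            if math.hypot(q[0] - p[0], q[1] - p[1]) > v_max * (tq - tp):
                continue  # respect the robot's speed limit
            if tq < best.get(j, math.inf) and edge_free(p, tp, q, tq):
                best[j], parent[j] = tq, i
                heapq.heappush(frontier, (tq, j))
    return None  # no collision-free path found with these samples

print(plan(start=(0.5, 0.5), goal=(4.5, 4.5)))
```

Because every edge advances in time and respects the speed limit, a graph search over arrival times yields a trajectory that threads around the predicted obstacle motion.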
3. Anticipating User Intentions for Efficient Coordination
Semi-Supervised Learning of Decision-Making Models for Human-Robot Collaboration
  • Vaibhav V. Unhelkar*, Shen Li*, Julie A. Shah
  • Conference on Robot Learning (CoRL) (2020)
    (Oral presentation, acceptance rate: 5%)
  • PDF | Video (4:16)
  • The key contribution is the first personalization framework for multi-step collaborative tasks that hierarchically models human implicit feedback as subgoals and motion, learned offline in a semi-supervised manner without fully labeled subgoals. During online interaction, the robot efficiently adapts to unobserved, evolving human subgoals. Empirical results demonstrate that our framework eliminates the need for labeled subgoals, offers flexibility in specifying task structures, and does so without compromising performance.
Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks
  • Vaibhav V. Unhelkar*, Shen Li*, Julie A. Shah
  • ACM/IEEE International Conference on Human-Robot Interaction (HRI) (2020)
    (Acceptance rate: 23.6%)
  • PDF | Video (2:16) | Talk at the conference (9:27) | ZDNet news
  • The key contribution is the first personalization framework for multi-step collaborative tasks that hierarchically leverages human behavior as implicit feedback and uses verbal communication as explicit feedback. Our empirical results show that, with both types of feedback, the system appropriately decides if, when, and what to communicate to the human.
Fast Online Segmentation of Activities from Partial Trajectories
  • Tariq Iqbal, Shen Li, Christopher Fourie, Bradley Hayes, Julie A. Shah
  • IEEE International Conference on Robotics and Automation (ICRA) (2019)
    (Acceptance rate: 45%)
  • PDF | Video (2:49) | Poster | PBS NewsHour (from 2:58)
  • The key contribution is an activity recognition algorithm that infers activity labels from partial human trajectories, treated as implicit feedback, before the activities are completed; human behavior is modeled as an ensemble of Gaussian Mixture Models (a toy version is sketched below). Our empirical results show that this approach significantly improves recognition accuracy even when only incomplete trajectory data are available.
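As a toy version of the idea, the sketch below fits one Gaussian Mixture Model per activity and labels a partial trajectory by the highest average per-frame log-likelihood. The data shapes and feature choices are hypothetical; the paper's ensemble and online segmentation machinery go well beyond this.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_activity_models(trajs_by_activity, n_components=3, seed=0):
    # Fit one GMM per activity on the stacked frames of its training trajectories.
    models = {}
    for activity, trajs in trajs_by_activity.items():
        frames = np.vstack(trajs)  # (total_frames, feature_dim)
        models[activity] = GaussianMixture(
            n_components=n_components, random_state=seed).fit(frames)
    return models

def classify_partial(models, partial_traj):
    # Averaging per-frame log-likelihoods keeps scores comparable across
    # trajectories observed for different lengths of time.
    scores = {activity: gmm.score_samples(partial_traj).mean()
              for activity, gmm in models.items()}
    return max(scores, key=scores.get)

# Usage with synthetic 2D "pose features":
rng = np.random.default_rng(0)
train = {"reach": [rng.normal((0, 0), 0.2, (50, 2)) for _ in range(5)],
         "place": [rng.normal((2, 2), 0.2, (50, 2)) for _ in range(5)]}
models = fit_activity_models(train)
partial = rng.normal((2, 2), 0.2, (8, 2))  # only 8 frames observed so far
print(classify_partial(models, partial))   # -> "place"
```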
4. Robust Robot Planning Under Unpredictable User Behavior
Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations
  • Yanwei Wang, Nadia Figueroa, Shen Li, Ankit Shah, Julie A. Shah
  • Conference on Robot Learning (CoRL) (2023)
    (Oral presentation, acceptance rate: 6.5%)
  • PDF | Webpage | Code | PBS NewsHour
  • The key contribution is the first multi-modal policy learning framework that provides theoretical guarantees of robustness against both task- and motion-level human interventions, without requiring predefined mode boundaries. By combining offline imitation learning with online policy adaptation, the learned policy ensures that a robot remains within the correct mode boundaries despite human interventions, leading to a significantly higher task success rate in empirical evaluations.
Reactive Task and Motion Planning under Temporal Logic Specifications
  • Shen Li*, Daehyung Park*, Yoonchang Sung*, Julie A. Shah, Nicholas Roy
  • IEEE International Conference on Robotics and Automation (ICRA) (2021)
    (Acceptance rate: 43.59%)
  • PDF | Video (2:28) | Slides
  • The key contribution is a hierarchical task-and-motion planning algorithm that efficiently adapts to various human interventions by engaging only the necessary planning layer, avoiding full replanning. By integrating linear temporal logic, incremental graph search, behavior trees, and motion primitives, the system empirically demonstrates efficient adaptation to interventions like object relocation, addition, and removal in both simulated and real-world environments.
5. Interpreting User Demonstrations as Linear Temporal Logic Formulas
Planning With Uncertain Specifications (PUnS)
  • Ankit Shah, Shen Li, Julie A. Shah
  • IEEE Robotics and Automation Letters (RA-L) (2020) (Impact factor: 3.7)
  • PDF | Video (2:11) | MIT News
  • The key contribution is the first RL planning algorithm that addresses non-Markovian and uncertain objectives, represented as a belief distribution over linear temporal logic formulas. This approach allows robots to plan behaviors based on task specifications learned from potentially noisy human demonstration data.
Supervised Bayesian Specification Inference from Demonstrations
  • Ankit Shah, Pritish Kamath, Shen Li, Patrick Craven, Kevin Landers,
    Kevin Oden, Julie A. Shah
  • The International Journal of Robotics Research (IJRR) (2023) (Impact factor: 9.2)
  • PDF
  • This journal paper builds on our earlier conference work (see below) by introducing the ability to learn task specifications inductively from positive examples alone, as well as from a combination of positive and negative examples. The journal version also expands the empirical evaluation to a multi-agent domain.
Bayesian Inference of Temporal Task Specifications from Demonstrations
  • Ankit Shah, Pritish Kamath, Shen Li, Julie A. Shah
  • Advances in Neural Information Processing Systems (NeurIPS) (2018)
    (Acceptance rate: 20.78%)
  • PDF | Webpage | Video (3:05) | Poster
  • The key contribution is one of the first algorithms for task specification learning that accurately infers linear temporal logic formulas from potentially noisy human demonstration data, while incorporating uncertainty estimates to enhance the reliability of the inference.
6. Studying Human Factors
Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy
  • Changjoo Nam, Huao Li, Shen Li, Michael Lewis, Katia Sycara
  • IEEE International Conference on Systems, Man, and Cybernetics (SMC) (2018)
  • PDF
  • This work studies how varied levels of swarm autonomy affect human trust and task performance in supervisory control of robot swarms.
Evaluating Critical Points in Trajectories
Natural Language Instructions for Human–Robot Collaborative Manipulation
Spatial References and Perspective in Natural Language Instructions for Collaborative Manipulation
Theses
Automatically Evaluating and Generating Clear Robot Explanations

Webpage design courtesy of Jon Barron